
    Discussion of: Brownian distance covariance

    Discussion on "Brownian distance covariance" by G\'abor J. Sz\'ekely and Maria L. Rizzo [arXiv:1010.0297]Comment: Published in at http://dx.doi.org/10.1214/09-AOAS312F the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org

    ICA and Kernel Distribution Testing


    Low-frequency local field potentials and spikes in primary visual cortex convey independent visual information

    Local field potentials (LFPs) reflect subthreshold integrative processes that complement spike train measures. However, little is yet known about the differences between how LFPs and spikes encode rich naturalistic sensory stimuli. We addressed this question by recording LFPs and spikes from the primary visual cortex of anesthetized macaques while presenting a color movie. We then determined how the power of LFPs and spikes at different frequencies represents the visual features in the movie. We found that the most informative LFP frequency ranges were 1–8 and 60–100 Hz. LFPs in the range of 12–40 Hz carried little information about the stimulus, and may primarily reflect neuromodulatory inputs. Spike power was informative only at frequencies <12 Hz. We further quantified “signal correlations” (correlations in the trial-averaged power response to different stimuli) and “noise correlations” (trial-by-trial correlations in the fluctuations around the average) of LFPs and spikes recorded from the same electrode. We found positive signal correlation between high-gamma LFPs (60–100 Hz) and spikes, as well as strong positive signal correlation within high-gamma LFPs, suggesting that high-gamma LFPs and spikes are generated within the same network. LFPs <24 Hz shared strong positive noise correlations, indicating that they are influenced by a common source, such as a diffuse neuromodulatory input. LFPs <40 Hz showed very little signal and noise correlation with LFPs >40 Hz and with spikes, suggesting that low-frequency LFPs reflect neural processes that in natural conditions are fully decoupled from those giving rise to spikes and to high-gamma LFPs.
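    The two correlation measures quantified above have simple empirical definitions, which the following sketch makes concrete; the (trials × stimuli) array layout and the function name are our assumptions, not the authors' analysis code:

```python
import numpy as np

def signal_noise_correlations(r1, r2):
    """Signal and noise correlations between two response measures.

    r1, r2: (n_trials, n_stimuli) arrays of power responses (e.g.
    band-limited LFP power and spike power) to the same stimuli;
    the shapes and names here are illustrative assumptions.
    """
    # Signal correlation: correlate the trial-averaged responses
    # to the different stimuli.
    sig = np.corrcoef(r1.mean(axis=0), r2.mean(axis=0))[0, 1]
    # Noise correlation: correlate trial-by-trial fluctuations
    # around each stimulus's mean response, pooled over stimuli.
    f1 = (r1 - r1.mean(axis=0)).ravel()
    f2 = (r2 - r2.mean(axis=0)).ravel()
    noise = np.corrcoef(f1, f2)[0, 1]
    return sig, noise
```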

    Learning the Roots of Visual Domain Shift

    In this paper we focus on the spatial nature of visual domain shift, attempting to learn where domain adaptation originates in each given image of the source and target set. We borrow concepts and techniques from the CNN visualization literature and learn domainness maps able to localize the degree of domain specificity in images. We derive from these maps features related to different domainness levels, and we show that using them as a preprocessing step for a domain adaptation algorithm strongly improves the final classification performance. Combined with the whole-image representation, these features provide state-of-the-art results on the Office dataset. Comment: Extended Abstract.

    A Weaker Faithfulness Assumption based on Triple Interactions

    One of the core assumptions in causal discovery is the faithfulness assumption, i.e. that independencies found in the data are due to separations in the true causal graph. This assumption can, however, be violated in many ways, including XOR connections, deterministic functions, or cancelling paths. In this work, we propose a weaker assumption that we call 2-adjacency faithfulness. In contrast to adjacency faithfulness, which assumes that there is no conditional independence between each pair of variables that are connected in the causal graph, we only require no conditional independence between a node and a subset of its Markov blanket that can contain up to two nodes. Equivalently, we adapt orientation faithfulness to this setting. We further propose a sound orientation rule for causal discovery that applies under these weaker assumptions. As a proof of concept, we derive a modified Grow and Shrink algorithm that recovers the Markov blanket of a target node, and prove its correctness under strictly weaker assumptions than the standard faithfulness assumption.
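    For context, the classical Grow-Shrink procedure that the paper modifies can be sketched as follows, with an idealized conditional-independence oracle standing in for a statistical test; this is the textbook algorithm, not the paper's weakened-assumption variant:

```python
def grow_shrink(target, variables, indep):
    """Grow-Shrink recovery of the Markov blanket of `target`.

    `indep(x, y, cond)` is an oracle returning True when x is
    independent of y given the list `cond`; in practice a
    conditional-independence test would stand in for it.
    """
    mb = []
    # Grow phase: add any variable still dependent on the target
    # given the current candidate blanket.
    changed = True
    while changed:
        changed = False
        for v in variables:
            if v != target and v not in mb and not indep(target, v, mb):
                mb.append(v)
                changed = True
    # Shrink phase: remove variables that become independent of the
    # target given the rest of the candidate blanket.
    for v in list(mb):
        rest = [u for u in mb if u != v]
        if indep(target, v, rest):
            mb.remove(v)
    return mb
```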

    A Kernel Test of Goodness of Fit

    We propose a nonparametric statistical test for goodness-of-fit: given a set of samples, the test determines how likely it is that these were generated from a target density function. The measure of goodness-of-fit is a divergence constructed via Stein's method using functions from a reproducing kernel Hilbert space. Our test statistic is based on an empirical estimate of this divergence, taking the form of a V-statistic in terms of the log gradients of the target density and the kernel. We derive a statistical test for both i.i.d. and non-i.i.d. samples, where we estimate the null distribution quantiles using a wild bootstrap procedure. We apply our test to quantifying convergence of approximate Markov chain Monte Carlo methods, to statistical model criticism, and to evaluating quality of fit versus model complexity in nonparametric density estimation.
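    A minimal sketch of the V-statistic described here, assuming a one-dimensional sample and a Gaussian kernel (both our illustrative choices): the Stein kernel combines the target's log-density gradient with the kernel and its derivatives, and the statistic is its average over all sample pairs.

```python
import numpy as np

def ksd_vstat(x, score, sigma=1.0):
    """V-statistic estimate of a kernel Stein discrepancy (1-D sketch).

    x: (n,) samples; score(x): gradient of the target's log density
    at x; Gaussian kernel with bandwidth sigma. Kernel choice and
    bandwidth are illustrative assumptions.
    """
    d = x[:, None] - x[None, :]            # pairwise differences x_i - x_j
    k = np.exp(-d**2 / (2 * sigma**2))     # RBF kernel matrix
    dkx = -d / sigma**2 * k                # d/dx_i of k
    dky = d / sigma**2 * k                 # d/dx_j of k
    dkxy = (1 / sigma**2 - d**2 / sigma**4) * k  # d^2/(dx_i dx_j) of k
    s = score(x)
    # Stein kernel h_p(x_i, x_j) built from scores and kernel derivatives.
    h = (np.outer(s, s) * k + s[:, None] * dky
         + s[None, :] * dkx + dkxy)
    return h.mean()
```

    For a standard normal target, for instance, score is simply lambda x: -x, and the statistic should be small for samples actually drawn from that target.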

    A Kernel Test for Three-Variable Interactions with Random Processes

    We apply a wild bootstrap method to the Lancaster three-variable interaction measure in order to detect factorisation of the joint distribution of three variables forming a stationary random process, a setting in which the existing permutation bootstrap method fails. As in the i.i.d. case, the Lancaster test is found to outperform existing tests in cases where two independent variables individually have a weak influence on a third, but the influence is strong when they are considered jointly. The main contributions of this paper are twofold: first, we prove that the Lancaster statistic satisfies the conditions required to estimate the quantiles of the null distribution using the wild bootstrap; second, the manner in which this is proved is novel, simpler than existing methods, and can further be applied to other statistics.
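    The wild bootstrap step itself is generic enough to sketch: given the matrix of a degenerate V-statistic's kernel evaluated on the sample, recompute the statistic under autocorrelated sign multipliers and read off a null quantile. The Markov sign-flip multiplier process and all parameter names here are our own illustrative choices:

```python
import numpy as np

def wild_bootstrap_quantile(h, n_boot=500, block=20, alpha=0.05, rng=None):
    """Wild-bootstrap null quantile for a degenerate V-statistic.

    h: (n, n) matrix of the statistic's kernel evaluated on the sample
    (e.g. a centred Lancaster interaction kernel); `block` sets the
    autocorrelation length of the multiplier process, which should
    match the dependence in the data. Names are our assumptions.
    """
    rng = np.random.default_rng(rng)
    n = h.shape[0]
    stats = np.empty(n_boot)
    for b in range(n_boot):
        # Autocorrelated sign multipliers: flip sign at each time step
        # with probability 1/block, so signs persist over ~block steps.
        flips = rng.random(n) < 1.0 / block
        w = np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)
        stats[b] = w @ h @ w / n**2
    return np.quantile(stats, 1 - alpha)
```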

    Maximum Mean Discrepancy Gradient Flow

    We construct a Wasserstein gradient flow of the maximum mean discrepancy (MMD) and study its convergence properties. The MMD is an integral probability metric defined for a reproducing kernel Hilbert space (RKHS), and serves as a metric on probability measures for a sufficiently rich RKHS. We obtain conditions for convergence of the gradient flow towards a global optimum, which can be related to particle transport when optimizing neural networks. We also propose a way to regularize this MMD flow, based on an injection of noise in the gradient. This algorithmic fix comes with theoretical and empirical evidence. The practical implementation of the flow is straightforward, since both the MMD and its gradient have simple closed-form expressions that can be easily estimated with samples.
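    A rough sketch of one step of the noise-regularized flow, under our own choices of kernel (Gaussian) and constants: particles take a gradient step on the empirical squared MMD, with the gradient evaluated at noise-perturbed copies of the particles.

```python
import numpy as np

def mmd_flow_step(x, y, step=0.1, noise=0.1, sigma=1.0, rng=None):
    """One noise-injected gradient step of an MMD particle flow (sketch).

    x: (n, d) particles being transported; y: (m, d) samples of the
    target; Gaussian kernel with bandwidth sigma. Evaluating the
    gradient at noise-perturbed particles follows the noise-injection
    idea; all constants here are arbitrary illustrative choices.
    """
    rng = np.random.default_rng(rng)
    xp = x + noise * rng.standard_normal(x.shape)  # perturbed particles

    def grad_k(a, b):
        # Gradient w.r.t. a_i of sum_j k(a_i, b_j), Gaussian kernel.
        d = a[:, None, :] - b[None, :, :]
        k = np.exp(-(d**2).sum(-1) / (2 * sigma**2))
        return -(k[..., None] * d).sum(1) / sigma**2

    # Gradient of MMD^2 w.r.t. the particles: the within-x term spreads
    # particles apart, the cross term attracts them toward y.
    g = 2 * (grad_k(xp, xp) / len(x) - grad_k(xp, y) / len(y)) / len(x)
    return x - step * g
```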